Lc0/cmake #552
base: next
Conversation
* c++14
* added boost dependency
* still need to enable tests
* removed currently required cuda
* disabled tensorflow build
* enabled all warnings
ninja
mkdir build
pushd build
cmake .. -DCMAKE_BUILD_TYPE=Release
It's worth adding a comment about which other options are possible.
(I need debugoptimized, as that's what I usually run the profiler on.)
Also, is NDEBUG defined by default? It would be nice to have a comment on how to control that (it's faster with NDEBUG because there are asserts here and there, but I personally like to keep them enabled).
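For context, the answer could be sketched as a comment in the build instructions (a sketch, not part of the PR; wording is up to the author). With GCC/Clang, CMake's Release and RelWithDebInfo types both define NDEBUG by default, so asserts are compiled out unless the flag is stripped:

```cmake
# Common values for CMAKE_BUILD_TYPE:
#   Release        - optimized, defines NDEBUG (asserts compiled out)
#   RelWithDebInfo - optimized with debug info, also defines NDEBUG;
#                    closest match to Meson's "debugoptimized"
#   Debug          - no optimization, no NDEBUG (asserts stay enabled)
# To keep asserts in an optimized build, strip NDEBUG from the flags:
string (REPLACE "-DNDEBUG" "" CMAKE_CXX_FLAGS_RELWITHDEBINFO
        "${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
```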
@@ -115,7 +115,7 @@ class ChessBoard {
   BitBoard ours() const { return our_pieces_; }
   BitBoard theirs() const { return their_pieces_; }
-  BitBoard pawns() const { return pawns_ * kPawnMask; }
+  BitBoard pawns() const;
This function won't be inlined without linker-level inlining, which not all compilers have. It's fine to keep it like that though, I guess; it should not have any noticeable performance impact.
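The "linker-level inlining" mentioned here is what CMake exposes as interprocedural optimization (IPO/LTO). A sketch of how it could be requested, assuming the `mcts` target from this PR and CMake >= 3.9 for the `CheckIPOSupported` module:

```cmake
# Sketch: ask for link-time optimization so out-of-line functions such
# as pawns() can still be inlined across translation units.
include (CheckIPOSupported)
check_ipo_supported (RESULT ipo_ok OUTPUT ipo_msg)
if (ipo_ok)
  set_property (TARGET mcts PROPERTY INTERPROCEDURAL_OPTIMIZATION TRUE)
else ()
  message (STATUS "IPO/LTO not supported: ${ipo_msg}")
endif ()
```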
lc0/src/chess/board.cc
Outdated
@@ -169,6 +169,8 @@ static const Move::Promotion kPromotions[] = {
   Move::Promotion::Knight,
 };

+static const BitBoard kPawnMask = 0x00FFFFFFFFFFFF00ULL;
Static is not necessary inside an anonymous namespace. Not a large difference, but we don't have it elsewhere. :)
You have it in all the declarations above it :) So I thought I'd be consistent.
Oops sorry, too much nitpicking from my side. :)
lc0/src/neural/cuda/network_cudnn.cu
Outdated
@@ -921,7 +922,7 @@ class CudnnNetwork : public Network {
 }

 // get rid of the BN layer by adjusting weights and biases of the
 // convolution idea proposed by Henrik Forst�n and first implemented in
Broken UTF-8 character.
lc0/src/utils/transpose.cc
Outdated
@@ -26,7 +26,7 @@ void TransposeTensor(const std::vector<int>& dims, std::vector<int> order,
 }
 std::vector<int> cur_idx(dims.size());
 for (int _ = 0; _ < from.size(); ++_) {
-  size_t from_idx = 0;
+  std::size_t from_idx = 0;
It's better to replace it with int then.
Seems like it's the only place in the code where I decided to use size_t; everywhere else int is used.
lc0/CMakeLists.txt
Outdated
endif (NOT CMAKE_BUILD_TYPE)

# See if we can set optimization flags as expected.
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
Would it be possible to additionally enable -Wthread-annotations when compiling with clang? Thread annotations are really useful in multithreaded code.
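For reference, the Clang warning flag that checks thread-safety annotations (GUARDED_BY and friends) is spelled -Wthread-safety. Guarded on the compiler id the way this CMakeLists already does it, a sketch could look like:

```cmake
# Sketch: enable Clang's thread-safety analysis only for Clang builds;
# GCC does not understand the flag.
if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
  add_compile_options (-Wthread-safety)
endif ()
```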
lc0/CMakeLists.txt
Outdated
find_package (CUDA)
find_package (CUDNN)
find_package (Boost REQUIRED filesystem)
What would be the easiest way to link it with the profiler?
(I need it to link with libprofiler, and with --no-as-needed. Without the profiler, --as-needed is fine.)
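One possible answer, as a sketch only (the option name and the lc0 target name are assumptions, not from this PR): wrap libprofiler in --no-as-needed so the linker keeps it even though no symbol in the binary references it directly.

```cmake
# Sketch: opt-in gperftools CPU profiler support. --no-as-needed keeps
# libprofiler in the link despite nothing referencing its symbols;
# --as-needed is restored afterwards for the remaining libraries.
option (USE_PROFILER "Link against gperftools libprofiler" OFF)
if (USE_PROFILER)
  target_link_libraries (lc0
    -Wl,--no-as-needed profiler -Wl,--as-needed)
endif ()
```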
@@ -0,0 +1,11 @@
+add_library (mcts SHARED
What does SHARED mean here? Will it build a bunch of .so's?
Maybe it makes sense to make it STATIC (will build .a) or MODULE (I don't know what the last one means, but suppose it will not create a library at all, but instead feed all the .o files directly to the linker).
Currently it will build a bunch of .so's indeed. I'm not familiar with MODULE, but static doesn't fail with an incorrect dependency tree.
Doesn't or does? If doesn't, why not change it? :)
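For reference: MODULE libraries are for dlopen()-style plugins and cannot be linked against by other targets at all, so the closest thing to "feeding the .o files directly to the linker" is actually an OBJECT library. A sketch of the two alternatives (source file names assumed from the lc0 tree, consumer target name made up):

```cmake
# Sketch: STATIC archives the objects into a .a consumed at final link;
# OBJECT keeps the bare .o files and hands them to the consuming target
# without creating a library on disk.
add_library (mcts STATIC          # or: add_library (mcts OBJECT ...)
  src/mcts/search.cc              # source names assumed
  src/mcts/node.cc)
target_link_libraries (lc0_main mcts)   # consumer target name made up
```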
ninja
mkdir build
pushd build
cmake .. -DCMAKE_BUILD_TYPE=Release
FWIW, on Ubuntu 17.10 I had to add -DCUDA_HOST_COMPILER=/usr/bin/g++-6, since CUDA doesn't support the default GCC 7 (I don't know if that can be automated).
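Automating that could be sketched as follows (an assumption, not part of the PR; the g++-6 path matches the Ubuntu 17.10 case above). CUDA_HOST_COMPILER is a cache variable of CMake's FindCUDA module, so it must be set before find_package(CUDA):

```cmake
# Sketch: pin the CUDA host compiler to an older GCC when one is
# present, since nvcc often rejects the newest system compiler.
if (EXISTS /usr/bin/g++-6 AND NOT DEFINED CUDA_HOST_COMPILER)
  set (CUDA_HOST_COMPILER /usr/bin/g++-6)
endif ()
find_package (CUDA)
```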
This gives the project full cmake support. It will optionally add in cuda/cudnn neural network support.